# Multimodal Image Reasoning
## Llama 3.2 11B Vision R1 Distill
Llama 3.2-Vision is a multimodal large language model developed by Meta. It supports image and text inputs and is optimized for visual recognition, image reasoning, and description tasks.
Tags: Image-to-Text | Transformers | Supports Multiple Languages

Uploaded by bababababooey
## Llama 3.2 11B Vision Instruct
Llama 3.2-Vision is a multimodal large language model developed by Meta. It supports both image and text inputs and handles tasks such as visual recognition, image reasoning, and captioning.
Tags: Image-to-Text | Transformers | Supports Multiple Languages
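Both cards describe the same input contract: one image paired with a text prompt, answered with generated text. The sketch below builds the chat-style message structure that vision-language models in the Transformers ecosystem typically consume; the processor/model classes mentioned in the comments (`AutoProcessor`, `apply_chat_template`) are assumptions about the surrounding pipeline, not details taken from this page.

```python
# Minimal sketch of the image+text payload a Llama 3.2-Vision-style
# model usually receives. This block only constructs and inspects the
# message structure; actual inference (processor + model) is assumed
# to happen elsewhere and is noted in comments only.

def build_vision_message(prompt: str) -> list[dict]:
    """Return a single-turn chat message pairing one image with a text prompt."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},                 # placeholder slot for the attached image
                {"type": "text", "text": prompt},  # the accompanying text instruction
            ],
        }
    ]

messages = build_vision_message("Describe this image in one sentence.")

# In a full pipeline (an assumption, not shown on this page), this list
# would be rendered into a prompt string via something like
# AutoProcessor.apply_chat_template(messages, add_generation_prompt=True)
# and passed to the model together with the raw image data.
print(messages[0]["role"])            # → user
print(len(messages[0]["content"]))    # → 2
```

The nested `content` list is what lets a single user turn carry both an image slot and its text prompt, matching the "image and text inputs" behavior the cards describe.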

Uploaded by alpindale